Review for NeurIPS paper: Planning with General Objective Functions: Going Beyond Total Rewards
Additional Feedback: Post-feedback response: I appreciate the author feedback. One item I want to flag, though: the feedback said (of one of the reviews), "We are grateful to the reviewer for providing a comprehensive list of papers on non-Markovian reward". I do not think the list is at all "comprehensive". It represents a number of very relevant and very significant papers, but there are others in this area.
Reinforcement Learning With Reward Machines in Stochastic Games
Hu, Jueming; Gaglione, Jean-Raphael; Wang, Yanze; Xu, Zhe; Topcu, Ufuk; Liu, Yongming
We investigate multi-agent reinforcement learning for stochastic games with complex tasks, where the reward functions are non-Markovian. We utilize reward machines to incorporate high-level knowledge of complex tasks. We develop an algorithm called Q-learning with reward machines for stochastic games (QRM-SG) to learn the best-response strategy at a Nash equilibrium for each agent. In QRM-SG, we define the Q-function at a Nash equilibrium in an augmented state space, which integrates the state of the stochastic game with the state of the reward machines. Each agent learns the Q-functions of all agents in the system. We prove that the Q-functions learned in QRM-SG converge to the Q-functions at a Nash equilibrium if the stage game at each time step during learning has a global optimum point or a saddle point, and the agents update their Q-functions based on the best-response strategy at this point. We use the Lemke-Howson method to derive the best-response strategy given the current Q-functions. Three case studies show that QRM-SG learns the best-response strategies effectively: after around 7500 episodes in Case Study I, 1000 episodes in Case Study II, and 1500 episodes in Case Study III, while baseline methods such as Nash Q-learning and MADDPG fail to converge to a Nash equilibrium in all three case studies.
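As a loose illustration of the augmented-state idea only (not the multi-agent QRM-SG algorithm itself), here is a minimal single-agent Q-learning sketch in which the learner's state pairs the environment state with the reward machine state. The corridor task, the reward machine, and all names are invented for this example:

```python
import random
from collections import defaultdict

# Toy environment: a 1-D corridor with cells 0..4. The task is to visit
# cell 0 (label 'a') and then cell 4 (label 'b'). A reward machine
# tracks task progress: u0 --a--> u1 --b--> u2 (accepting).
N = 5
ACTIONS = (-1, +1)

def label(s):
    return 'a' if s == 0 else ('b' if s == N - 1 else None)

def rm_step(u, lab):
    """Reward machine transition: returns (next RM state, reward)."""
    if u == 0 and lab == 'a':
        return 1, 0.0
    if u == 1 and lab == 'b':
        return 2, 1.0  # task complete
    return u, 0.0

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning over the augmented state (s, u)."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s, u = 2, 0
        for _ in range(50):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, u, act)])
            s2 = min(max(s + a, 0), N - 1)
            u2, r = rm_step(u, label(s2))
            best = max(Q[(s2, u2, act)] for act in ACTIONS)
            Q[(s, u, a)] += alpha * (r + gamma * best - Q[(s, u, a)])
            s, u = s2, u2
            if u == 2:
                break
    return Q

def greedy_rollout(Q, max_steps=20):
    """Follow the greedy policy; return final RM state and step count."""
    s, u, steps = 2, 0, 0
    while u != 2 and steps < max_steps:
        a = max(ACTIONS, key=lambda act: Q[(s, u, act)])
        s = min(max(s + a, 0), N - 1)
        u, _ = rm_step(u, label(s))
        steps += 1
    return u, steps
```

Because the Q-table is indexed by (s, u), the same environment cell can have different values depending on task progress, which is what lets a Markovian learner handle the non-Markovian "a then b" reward.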
Probabilistic Planning with Risk-Sensitive Criterion
Hou, Ping (New Mexico State University)
While probabilistic planning models have been extensively used by the AI and Decision-Theoretic communities for planning under uncertainty, the objective of minimizing the expected cumulative cost is inappropriate for high-stakes planning problems. With this motivation in mind, we revisit the Risk-Sensitive criterion (RS-criterion), where the objective is to find a policy that maximizes the probability that the cumulative cost is within some user-defined cost threshold. The overall scope of this research is to develop efficient and scalable algorithms to optimize the RS-criterion in probabilistic planning problems. In our recent paper (Hou, Yeoh, and Varakantham 2014), we formally defined Risk-Sensitive MDPs (RS-MDPs) and introduced new algorithms for RS-MDPs with non-negative costs. Next, my plan is to develop algorithms for RS-MDPs with negative-cost cycles and for Risk-Sensitive POMDPs (RS-POMDPs).
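To make the RS-criterion concrete, here is a hedged toy sketch (not the algorithms of Hou, Yeoh, and Varakantham 2014): with non-negative integer costs, one can maximize P(cumulative cost <= threshold) by dynamic programming over a state augmented with the remaining cost budget. The two-action MDP below is invented; it shows how the RS-criterion can disagree with expected cost, since the "risky" action has higher expected cost yet is preferred under a tight budget:

```python
# Each action maps to a list of (probability, cost, next_state) outcomes.
# 'safe' always costs 3; 'risky' costs 1 or 6 with equal probability.
MDP = {
    'start': {
        'safe':  [(1.0, 3, 'goal')],
        'risky': [(0.5, 1, 'goal'), (0.5, 6, 'goal')],
    },
    'goal': {},  # absorbing
}

def rs_value(s, budget):
    """Max probability of reaching 'goal' with cumulative cost <= budget,
    and the action achieving it. The augmented state is (s, budget)."""
    if s == 'goal':
        return 1.0, None
    best_p, best_a = 0.0, None
    for a, outcomes in MDP[s].items():
        # Outcomes whose cost overshoots the remaining budget contribute 0.
        p = sum(pr * rs_value(s2, budget - c)[0]
                for pr, c, s2 in outcomes if budget - c >= 0)
        if p > best_p:
            best_p, best_a = p, a
    return best_p, best_a
```

With threshold 4, 'safe' succeeds with probability 1.0; with threshold 2, 'safe' is infeasible and 'risky' succeeds with probability 0.5, so the optimal action flips even though 'safe' has the lower expected cost in both cases.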